iterations, x, r_expected, x - r_expected, x - x0);
}
while (status == GSL_CONTINUE && iterations < max_iterations);
}
Here are the results for Newton's method,
bash$ ./a.out
using newton method
iter root actual err err(est)
1 3.0000000 2.2360680 +0.7639320 -2.0000000
2 2.3333333 2.2360680 +0.0972654 -0.6666667
3 2.2380952 2.2360680 +0.0020273 -0.0952381
Converged:
4 2.2360689 2.2360680 +0.0000009 -0.0020263
Note that the error can be estimated more accurately by taking the
difference between the current iterate and next iterate rather than the
previous iterate. The other derivative solvers can be investigated by
changing `gsl_root_fdfsolver_newton' to `gsl_root_fdfsolver_secant' or
`gsl_root_fdfsolver_steffenson'.
File: gsl-ref.info, Node: Root Finding References and Further Reading, Prev: Root Finding Examples, Up: One dimensional Root-Finding
References and Further Reading
==============================
For information on the Brent-Dekker algorithm see the following two
papers,
R. P. Brent, "An algorithm with guaranteed convergence for finding
a zero of a function", `Computer Journal', 14 (1971) 422-425
J. C. P. Bus and T. J. Dekker, "Two Efficient Algorithms with
Guaranteed Convergence for Finding a Zero of a Function", `ACM
Transactions on Mathematical Software', Vol. 1 No. 4 (1975) 330-345
File: gsl-ref.info, Node: Multidimensional Root-Finding, Next: Minimization, Prev: One dimensional Root-Finding, Up: Top
Multidimensional Root-Finding
*****************************
This chapter describes functions for multidimensional root-finding
(solving nonlinear systems with n equations in n unknowns). The
library provides low level components for a variety of iterative
solvers and convergence tests. These can be combined by the user to
achieve the desired solution, with full access to the intermediate
steps of the iteration. Each class of methods uses the same framework,
so that you can switch between solvers at runtime without needing to
recompile your program. Each instance of a solver keeps track of its
own state, allowing the solvers to be used in multi-threaded programs.
The header file `gsl_multiroots.h' contains prototypes for the
multidimensional root finding functions and related declarations.
* Menu:
* Overview of Multidimensional Root Finding::
* Initializing the Multidimensional Solver::
* Providing the multidimensional system of equations to solve::
* Iteration of the multidimensional solver::
* Search Stopping Parameters for the multidimensional solver::
* Algorithms using Derivatives::
* Algorithms without Derivatives::
* Example programs for Multidimensional Root finding::
* References and Further Reading for Multidimensional Root Finding::
File: gsl-ref.info, Node: Overview of Multidimensional Root Finding, Next: Initializing the Multidimensional Solver, Up: Multidimensional Root-Finding
Overview
========
The problem of multidimensional root finding requires the
simultaneous solution of n equations, f_i, in n variables, x_i,
f_i (x_1, ..., x_n) = 0 for i = 1 ... n.
In general there are no bracketing methods available for n dimensional
systems, and no way of knowing whether any solutions exist. All
algorithms proceed from an initial guess using a variant of the Newton
iteration,
x -> x' = x - J^{-1} f(x)
where x, f are vector quantities and J is the Jacobian matrix J_{ij} =
d f_i / d x_j. Additional strategies are also used to enlarge the
region of convergence. These include requiring a decrease in the norm
|f| on each step proposed by Newton's method, or taking steepest-descent
steps in the direction of the negative gradient of |f|.
The evaluation of the Jacobian matrix can be problematic, either
because programming the derivatives is intractable or because
computation of the n^2 terms of the matrix becomes too expensive. For
these reasons the algorithms provided by the library are divided into
two classes according to whether the derivatives are available or not.
The state for solvers with an analytic Jacobian matrix is held in a
`gsl_multiroot_fdfsolver' struct. The updating procedure requires both
the function and its derivatives to be supplied by the user.
The state for solvers which do not use an analytic Jacobian matrix is
held in a `gsl_multiroot_fsolver' struct. The updating procedure uses
only function evaluations (not derivatives). The algorithms estimate
the matrix J or J^{-1} by approximate methods.
File: gsl-ref.info, Node: Initializing the Multidimensional Solver, Next: Providing the multidimensional system of equations to solve, Prev: Overview of Multidimensional Root Finding, Up: Multidimensional Root-Finding
These functions return a pointer to the name of the solver. For
example,
printf ("s is a '%s' solver\n", gsl_multiroot_fdfsolver_name (s));
would print something like `s is a 'newton' solver'
File: gsl-ref.info, Node: Providing the multidimensional system of equations to solve, Next: Iteration of the multidimensional solver, Prev: Initializing the Multidimensional Solver, Up: Multidimensional Root-Finding
Providing the function to solve
===============================
You must provide n functions of n variables for the root finders to
operate on. In order to allow for general parameters the functions are
defined by the following data types:
- Data Type: gsl_multiroot_function
This data type defines a general system of functions with parameters.
Note that the function `powell_fdf' is able to reuse existing terms
from the function when calculating the Jacobian, thus saving time.
File: gsl-ref.info, Node: Iteration of the multidimensional solver, Next: Search Stopping Parameters for the multidimensional solver, Prev: Providing the multidimensional system of equations to solve, Up: Multidimensional Root-Finding
Iteration
=========
The following functions drive the iteration of each algorithm. Each
function performs one iteration to update the state of any solver of the
corresponding type. The same functions work for all solvers so that
different methods can be substituted at runtime without modifications to
the code.
- Function: int gsl_multiroot_fsolver_iterate (gsl_multiroot_fsolver *
S)
- Function: int gsl_multiroot_fdfsolver_iterate
(gsl_multiroot_fdfsolver * S)
These functions perform a single iteration of the solver S. If the
iteration encounters an unexpected problem then an error code will
be returned,
`GSL_EBADFUNC'
the iteration encountered a singular point where the function
or its derivative evaluated to `Inf' or `NaN'.
`GSL_ENOPROG'
the iteration is not making any progress, preventing the
algorithm from continuing.
The solver maintains a current best estimate of the root at all
times. This information can be accessed with the following auxiliary
functions,
These functions return the current estimate of the root for the
solver S.
File: gsl-ref.info, Node: Search Stopping Parameters for the multidimensional solver, Next: Algorithms using Derivatives, Prev: Iteration of the multidimensional solver, Up: Multidimensional Root-Finding
Search Stopping Parameters
==========================
A root finding procedure should stop when one of the following
conditions is true:
* A multidimensional root has been found to within the
user-specified precision.
* A user-specified maximum number of iterations has been reached.
* An error has occurred.
In the multidimensional root finding framework of GSL the handling of
these conditions is under user control. The functions below allow the
user to test the precision of the current result in several standard
ways.
- Function: int gsl_multiroot_test_delta (const gsl_vector * DX, const
gsl_vector * X, double EPSABS, double EPSREL)
This function tests for the convergence of the sequence by
comparing the last step DX with the absolute error EPSABS and
relative error EPSREL to the current position X. The test returns
`GSL_SUCCESS' if the following condition is achieved,
|dx_i| < epsabs + epsrel |x_i|
for each component of X and returns `GSL_CONTINUE' otherwise.
- Function: int gsl_multiroot_test_residual (const gsl_vector * F,
double EPSABS)
This function tests the residual value F against the absolute
error bound EPSABS. The test returns `GSL_SUCCESS' if the
following condition is achieved,
\sum_i |f_i| < epsabs
and returns `GSL_CONTINUE' otherwise. This criterion is suitable
for situations where the precise location of the root, x, is
unimportant provided a value can be found where the residual is
small enough.
File: gsl-ref.info, Node: Algorithms using Derivatives, Next: Algorithms without Derivatives, Prev: Search Stopping Parameters for the multidimensional solver, Up: Multidimensional Root-Finding
Algorithms using Derivatives
============================
The root finding algorithms described in this section make use of
both the function and its derivative. They require an initial guess
for the location of the root, but there is no absolute guarantee of
convergence - the function must be suitable for this technique and the
initial guess must be sufficiently close to the root for it to work.
When the conditions are satisfied then convergence is quadratic.
This is a modified version of Newton's method which attempts to
improve global convergence by requiring every step to reduce the
Euclidean norm of the residual, |f(x)|. If the Newton step leads
to an increase in the norm then a reduced step of relative size
t = (\sqrt(1 + 6 r) - 1) / (3 r)
is proposed, with r being the ratio of norms |f(x')|^2/|f(x)|^2.
This procedure is repeated until a suitable step size is found.
File: gsl-ref.info, Node: Algorithms without Derivatives, Next: Example programs for Multidimensional Root finding, Prev: Algorithms using Derivatives, Up: Multidimensional Root-Finding
Algorithms without Derivatives
==============================
The algorithms described in this section do not require any
derivative information to be supplied by the user. Any derivatives
needed are approximated by finite differences.
- Solver: gsl_multiroot_fsolver_hybrids
This is a version of the Hybrid algorithm which replaces calls to
the Jacobian function by its finite difference approximation. The
finite difference approximation is computed using
`gsl_multiroots_fdjac' with a relative step size of
`GSL_SQRT_DBL_EPSILON'.
- Solver: gsl_multiroot_fsolver_hybrid
This is a finite difference version of the Hybrid algorithm without
internal scaling.
- Solver: gsl_multiroot_fsolver_dnewton
The "discrete Newton algorithm" is the simplest method of solving a
multidimensional system. It uses the Newton iteration
x -> x - J^{-1} f(x)
where the Jacobian matrix J is approximated by taking finite
differences of the function F. The approximation scheme used by
this implementation is,
J_{ij} = (f_i(x + \delta_j) - f_i(x)) / \delta_j
where \delta_j is a step of size \sqrt\epsilon |x_j| with \epsilon
being the machine precision (\epsilon \approx 2.22 \times 10^-16).
The order of convergence of Newton's algorithm is quadratic, but
the finite differences require n^2 function evaluations on each
iteration. The algorithm may become unstable if the finite
differences are not a good approximation to the true derivatives.
- Solver: gsl_multiroot_fsolver_broyden
The "Broyden algorithm" is a version of the discrete Newton
algorithm which attempts to avoid the expensive update of the
Jacobian matrix on each iteration. The changes to the Jacobian
are also approximated, using a rank-1 update,
J^{-1} -> J^{-1} - (J^{-1} df - dx) dx^T J^{-1} / dx^T J^{-1} df
where the vectors dx and df are the changes in x and f. On the
first iteration the inverse Jacobian is estimated using finite
differences, as in the discrete Newton algorithm.
This approximation gives a fast update but is unreliable if the
changes are not small, and the estimate of the inverse Jacobian
becomes worse as time passes. The algorithm has a tendency to
become unstable unless it starts close to the root. The Jacobian
is refreshed if this instability is detected (consult the source
for details).
This algorithm is not recommended for real work and is included
only for pedagogical purposes.
File: gsl-ref.info, Node: Example programs for Multidimensional Root finding, Next: References and Further Reading for Multidimensional Root Finding, Prev: Algorithms without Derivatives, Up: Multidimensional Root-Finding
Examples
========
The multidimensional solvers are used in a similar way to the
one-dimensional root finding algorithms. This first example
demonstrates the `hybrids' scaled-hybrid algorithm, which does not
require derivatives. The program solves the Rosenbrock system of
equations
f_1 (x, y) = a (1 - x)
f_2 (x, y) = b (y - x^2)
with a = 1, b = 10. The solution of this system lies at (x,y) = (1,1)
in a narrow valley.
The first stage of the program is to define the system of equations,
The main program now makes calls to the corresponding `fdfsolver'
versions of the functions,
int
main (void)
{
  const gsl_multiroot_fdfsolver_type *T;
  gsl_multiroot_fdfsolver *s;

  int status;
  size_t iter = 0;

  const size_t n = 2;
  struct rparams p = {1.0, 10.0};
  gsl_multiroot_function_fdf f = {&rosenbrock_f,
                                  &rosenbrock_df,
                                  &rosenbrock_fdf,
                                  n, &p};

  double x_init[2] = {-10.0, -5.0};
  gsl_vector *x = gsl_vector_alloc (n);

  gsl_vector_set (x, 0, x_init[0]);
  gsl_vector_set (x, 1, x_init[1]);

  T = gsl_multiroot_fdfsolver_gnewton;
  s = gsl_multiroot_fdfsolver_alloc (T, &f, x);

  print_state (iter, s);

  do
    {
      iter++;

      status = gsl_multiroot_fdfsolver_iterate (s);

      print_state (iter, s);

      if (status)
        break;

      status = gsl_multiroot_test_residual (s->f, 0.0000001);
    }
  while (status == GSL_CONTINUE && iter < 1000);

  printf ("status = %s\n", gsl_strerror (status));

  gsl_multiroot_fdfsolver_free (s);
  gsl_vector_free (x);
  return 0;
}
The addition of derivative information to the `hybrids' solver does not
make any significant difference to its behavior, since it is able to
approximate the Jacobian numerically with sufficient accuracy. To
illustrate the behavior of a different derivative solver we switch to
`gnewton'. This is a traditional Newton solver with the constraint that
it scales back its step if the full step would lead "uphill". Here is
the output for the `gnewton' algorithm,
iter = 0 x = -10.00000000 -5.00000000 f(x) = 1.100e+01 -1.050e+03
iter = 1 x = -4.23051697 -65.31732261 f(x) = 5.231e+00 -8.321e+02
iter = 2 x = 1.00000000 -26.35830775 f(x) = -8.882e-16 -2.736e+02
iter = 3 x = 1.00000000 1.00000000 f(x) = -2.220e-16 -4.441e-15
status = success
The convergence is much more rapid, but the route taken involves a wide
excursion out to the point (-4.23,-65.3). This could cause the algorithm
to go astray in a realistic application. The hybrid algorithm follows
the downhill path to the solution more reliably.
File: gsl-ref.info, Node: References and Further Reading for Multidimensional Root Finding, Prev: Example programs for Multidimensional Root finding, Up: Multidimensional Root-Finding
References and Further Reading
==============================
The original version of the Hybrid method is described in the
following articles by Powell,
M.J.D. Powell, "A Hybrid Method for Nonlinear Equations" (Chap 6, p
87-114) and "A Fortran Subroutine for Solving systems of Nonlinear
Algebraic Equations" (Chap 7, p 115-161), in `Numerical Methods for
Nonlinear Algebraic Equations', P. Rabinowitz, editor. Gordon and
Breach, 1970.
The following papers are also relevant to the algorithms described in
this section,
J.J. Moré, M.Y. Cosnard, "Numerical Solution of Nonlinear
Equations", `ACM Transactions on Mathematical Software', Vol 5, No
1, (1979), p 64-85
C.G. Broyden, "A Class of Methods for Solving Nonlinear
Simultaneous Equations", `Mathematics of Computation', Vol 19